
    What drives the relevance and quality of experts' adjustment to model-based forecasts?

    Experts frequently adjust statistical model-based forecasts. Sometimes this leads to higher forecast accuracy, but expert forecasts can also be dramatically worse. We explore the potential drivers of the relevance and quality of experts' added knowledge. For that purpose, we examine a very large database covering monthly forecasts for pharmaceutical products in seven categories concerning thirty-five countries. The extensive results lead to two main outcomes, which are (1) that more balance between model and expert leads to more relevance of the added value of the expert and (2) that smaller-sized adjustments lead to higher quality, although sometimes very large adjustments can be beneficial too. In general, too much input of the expert leads to a deterioration of the quality of the final forecast.
    Keywords: expert forecasts; judgemental adjustment

    Do experts' SKU forecasts improve after feedback?

    We analyze the behavior of experts who quote forecasts for monthly SKU-level sales data, where we compare data before and after the moment that experts received different kinds of feedback on their behavior. We have data for 21 experts located in as many countries who make SKU-level forecasts for a variety of pharmaceutical products for October 2006 to September 2007. We study the behavior of the experts by comparing their forecasts with those from an automated statistical program, and we report the forecast accuracy over these 12 months. In September 2007 these experts were given feedback on their behavior and they received a training at the headquarters' office, where specific attention was given to the ins and outs of the statistical program. Next, we study the behavior of the experts for the 3 months after the training session, that is, October 2007 to December 2007. Our main conclusion is that in the second period the experts' forecasts deviated less from the statistical forecasts and that their accuracy improved substantially.
    Keywords: expert forecasts; model forecasts; cognitive process feedback; judgmental adjustment; outcome feedback; performance feedback; task properties feedback

    Do experts incorporate statistical model forecasts and should they?

    Experts can rely on statistical model forecasts when creating their own forecasts. Usually it is not known what experts actually do. In this paper we focus on three questions, which we try to answer given the availability of expert forecasts and model forecasts. First, is the expert forecast related to the model forecast and how? Second, how is this potential relation influenced by other factors? Third, how does this relation influence forecast accuracy? We propose a new and innovative two-level Hierarchical Bayes model to answer these questions. We apply our proposed methodology to a large data set of forecasts and realizations of SKU-level sales data from a pharmaceutical company. We find that expert forecasts can depend on model forecasts in a variety of ways. Average sales levels, sales volatility, and the forecast horizon influence this dependence. We also demonstrate that theoretical implications of expert behavior on forecast accuracy are reflected in the empirical data.
    Keywords: endogeneity; Bayesian analysis; expert forecasts; model forecasts; forecast adjustment

    Evaluating Econometric Models and Expert Intuition

    This thesis is about forecasting situations which involve econometric models and expert intuition. The first three chapters are about what it is that experts do when they adjust statistical model forecasts and what might improve that adjustment behavior. It is investigated how expert forecasts are related to model forecasts, how this potential relation is influenced by other factors and how it influences forecast accuracy, how feedback influences forecasting behavior and accuracy, and which loss function is associated with experts' forecasts. The final chapter focuses on how to make optimal use of multiple forecasts produced by multiple experts for one and the same event. It is found that potential disagreement amongst forecasters can have predictive value, especially when used in Markov regime-switching models.

    Expert opinion versus expertise in forecasting

    Expert opinion is an opinion given by an expert, and it can have significant value in forecasting key policy variables in economics and finance. Expert forecasts can either be expert opinions, or forecasts based on an econometric model. An expert forecast that is based on an econometric model is replicable, and can be defined as a replicable expert forecast (REF), whereas an expert opinion that is not based on an econometric model can be defined as a non-replicable expert forecast (Non-REF). Both replicable and non-replicable expert forecasts may be made available by an expert regarding a policy variable of interest. In this paper we develop a model to generate replicable expert forecasts, and compare REF with Non-REF. A method is presented to compare REF and Non-REF using efficient estimation methods, and a direct test of expertise on expert opinion is given. The latter serves the purpose of investigating whether expert adjustment improves the model-based forecasts. Illustrations for forecasting pharmaceutical SKUs, where the econometric model is of (variations of) the ARIMA type, show the relevance of the new methodology proposed in the paper. In particular, experts possess significant expertise, and expert forecasts are significant in explaining actual sales.
    Keywords: forecasts; efficient estimation; generated regressors; direct test; expert opinion; non-replicable expert forecast; replicable expert
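    The distinguishing feature of a replicable expert forecast is that any analyst can regenerate it from the same data and a stated model. A minimal sketch of that idea, using a least-squares AR(1) fit as a simplified stand-in for the ARIMA-type models the paper works with (the function name and toy data are illustrative, not from the paper):

    ```python
    import numpy as np

    def replicable_forecast(sales, horizon=1):
        """Fit an AR(1) model y_t = c + phi * y_{t-1} by least squares
        and iterate it forward `horizon` steps.

        A replicable expert forecast (REF) is any forecast another
        analyst could reproduce from the same data and stated model;
        this AR(1) fit is only a simplified illustration of that idea.
        """
        y = np.asarray(sales, dtype=float)
        X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
        c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
        f = y[-1]
        for _ in range(horizon):
            f = c + phi * f
        return f

    # Toy series generated by y_t = 2 + 0.8 * y_{t-1}, starting at 0
    sales = [0.0]
    for _ in range(6):
        sales.append(2.0 + 0.8 * sales[-1])

    ref = replicable_forecast(sales, horizon=1)  # recovers c=2, phi=0.8
    ```

    A Non-REF, by contrast, would be a number quoted by the expert with no such reproducible recipe behind it.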

    A Manager's Perspective on Combining Expert and Model-based Forecasts

    We study the performance of sales forecasts which linearly combine model-based forecasts and expert forecasts. Using a unique and very large database containing monthly model-based forecasts for many pharmaceutical products and forecasts given by thirty-seven different experts, we document that a combination almost always is most accurate. When correlating the specific weights in these "best" linear combinations with experts' experience and behaviour, we find that more experience is beneficial for forecasts at nearby horizons. And, when the rate of bracketing increases, the relative weights converge to a 50%-50% distribution, with some slight variation across forecast horizons.
    Keywords: combining forecasts; experts forecast; model-based forecasts
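    The "best" linear combination above can be found by searching over the weight placed on the model forecast. A minimal sketch with hypothetical toy data (the grid search and the numbers are illustrative; the paper's estimation is more involved):

    ```python
    import numpy as np

    def best_linear_combination(model_f, expert_f, actual):
        """Grid-search the weight w in f = w*model + (1-w)*expert
        that minimizes RMSE against realized sales."""
        best_w, best_rmse = None, np.inf
        for w in np.linspace(0.0, 1.0, 101):
            combined = w * model_f + (1 - w) * expert_f
            rmse = np.sqrt(np.mean((combined - actual) ** 2))
            if rmse < best_rmse:
                best_w, best_rmse = w, rmse
        return best_w, best_rmse

    # Toy example: the model under-predicts and the expert over-predicts
    # by the same amount (they "bracket" the outcome), so the best
    # combination puts equal weight on each.
    actual = np.array([100.0, 110.0, 105.0, 120.0])
    model_f = actual - 5.0
    expert_f = actual + 5.0
    w, rmse = best_linear_combination(model_f, expert_f, actual)  # w = 0.5
    ```

    The toy example also illustrates why bracketing pushes the weights toward 50%-50%: when the two forecasts err on opposite sides of the outcome, averaging cancels the errors.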

    Experts' adjustment to model-based SKU-level forecasts: Does the forecast horizon matter?

    Experts (managers) may have domain-specific knowledge that is not included in a statistical model and that can improve short-run and long-run forecasts of SKU-level sales data. While one-step-ahead forecasts address the conditional mean of the variable, model-based forecasts for longer horizons have a tendency to converge to the unconditional mean of a time series variable. Analyzing a large database concerning pharmaceutical sales forecasts for various products and adjusted by a range of experts, we examine whether the forecast horizon has an impact on what experts do and on how good they are once they adjust model-based forecasts. For this, we use regression-based methods and we obtain five innovative results. First, forecasts at all horizons experience managerial intervention. Second, the horizon that is most relevant to the managers shows greater overweighting of the expert adjustment. Third, for all horizons the expert-adjusted forecasts have less accuracy than pure model-based forecasts, with distant horizons having the least deterioration. Fourth, when expert-adjusted for
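    Comparing the accuracy of model-based and expert-adjusted forecasts, per horizon, comes down to computing an error measure for each forecast source over the same realizations. A minimal sketch using MAPE (the numbers are hypothetical; the paper's accuracy measures and regression-based analysis are richer):

    ```python
    import numpy as np

    def mape(forecast, actual):
        """Mean absolute percentage error, in percent."""
        forecast = np.asarray(forecast, dtype=float)
        actual = np.asarray(actual, dtype=float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    # Hypothetical numbers for one horizon: the expert's adjustments
    # move the forecast further from the realized sales.
    actual = np.array([100.0, 200.0, 150.0])
    model_f = np.array([95.0, 210.0, 140.0])
    expert_f = np.array([80.0, 240.0, 175.0])

    model_err = mape(model_f, actual)
    expert_err = mape(expert_f, actual)  # larger: adjustment hurt accuracy
    ```

    Running such a comparison separately for each forecast horizon is what allows the deterioration from expert adjustment to be traced across short and distant horizons.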

    Combining SKU-level sales forecasts from models and experts

    We study the performance of SKU-level sales forecasts which linearly combine statistical model forecasts and expert forecasts. Using a large and unique database containing model forecasts for monthly sales of various pharmaceutical products and forecasts given by about fifty experts, we document that a linear combination of those forecasts usually is most accurate. Corre
